Patent abstract:
This automatic classification system comprises a data processing unit (20) programmed to classify a vehicle present on images captured by a camera (6), by processing the captured images, the captured images being intensity matrix images and the camera (6) being arranged to capture images of the vehicles in a plunging, three-quarter view, preferably a three-quarter front view.
Publication number: FR3018940A1
Application number: FR1452453
Filing date: 2014-03-24
Publication date: 2015-09-25
Inventors: Bruno Lameyre; Jacques Jouannais
Applicant: SURVISION
IPC main classification:
Patent description:

[0001] The present invention relates to the field of automatic classification of motor vehicles, in particular with a view to determining the amount of a toll to be paid.
[0002] The access of motor vehicles to certain areas is sometimes conditional on the payment of a toll, the amount of which often varies according to the class of the vehicle, vehicles being divided into different classes according to predetermined criteria (length, number of axles, presence of a hitch, etc.). It is possible to provide an automatic vehicle classification system to determine the amount of toll to be paid by each vehicle. FR 2 903 519 A1 discloses an automatic classification system for motor vehicles comprising a laser device combined with a thermal camera to determine the class of vehicles. The laser device measures the length, width and height of the vehicle. The thermal camera makes it possible to estimate the number of rolling axles from the heat they radiate. EP 2 306 426 A1 discloses an automatic classification system for motor vehicles including time-of-flight cameras capable of capturing a three-dimensional image of the scene to determine the physical characteristics of the vehicle and its class.
[0003] One of the aims of the present invention is to propose an automatic vehicle classification system that can be implemented simply and economically, while being reliable. For this purpose, the invention proposes an automatic classification system for motor vehicles traveling on a road, comprising a data processing unit programmed to classify a vehicle present on images captured by a camera, by processing the captured images, the captured images being intensity matrix images and the camera being arranged to capture images of the vehicles in a plunging, three-quarter view, preferably a three-quarter front view. The classification system may optionally include one or more of the following features:
- the data processing unit is programmed to calculate a rectified image from a captured image, so as to restore the parallelism between parallel elements of the real scene and the perpendicularity between perpendicular elements of the real scene;
- the calculation of the rectified image is performed by applying a predetermined transformation matrix to the captured image;
- the data processing unit is programmed to calculate a reconstituted image on which a vehicle is visible over its entire length, from a sequence of images on each of which at least one section of the vehicle appears;
- the data processing unit is programmed to identify characteristic points of the vehicle appearing in several images of the image sequence and to combine the images of the image sequence according to the identified characteristic points to form the reconstituted image;
- the data processing unit is programmed to calculate the length and/or the height of the vehicle from the reconstituted image;
- the data processing unit is programmed to count the number of axles of the vehicle by counting the number of wheels appearing on a lateral face of the vehicle visible on the reconstituted image;
- the data processing unit is programmed to detect a wheel by means of an ellipse identification algorithm;
- the data processing unit is programmed to count the number of axles according to predetermined axle configurations;
- the data processing unit is programmed to detect the entry of a new vehicle into the field of the camera by detection of a license plate on the captured images;
- the data processing unit is programmed to detect the separation between a vehicle and the next vehicle according to the photometric characteristics of the road and the signaling.
The invention also relates to an automatic vehicle classification system comprising a camera providing intensity matrix images, the camera being arranged to capture images of vehicles traveling on the road in a plunging, three-quarter view, preferably a three-quarter front view, and a data processing unit programmed to classify the vehicles by processing the images captured by the camera. The invention also relates to a method of automatic classification of motor vehicles traveling on a road, comprising the classification of a vehicle present on images captured by a camera, by processing the captured images with a data processing unit, the captured images being intensity matrix images and the camera being arranged to capture images of the vehicles in a plunging, three-quarter view, preferably a three-quarter front view.
[0004] The invention further relates to a computer program product programmed to implement the above method when executed by a data processing unit. The invention and its advantages will be better understood on reading the description which follows, given solely by way of example and with reference to the appended drawings, in which:
- FIGS. 1 and 2 are schematic side and top views of an automatic classification system;
- FIGS. 3 and 4 are illustrations of raw images captured by a camera of the automatic classification system, on which a vehicle appears;
- FIGS. 5 and 6 are illustrations of rectified images obtained by applying a geometric transformation to the raw images of FIGS. 3 and 4;
- FIG. 7 is an illustration of a reconstituted image obtained by assembling rectified images, including the rectified images of FIGS. 5 and 6;
- FIGS. 8, 9 and 10 are schematic top views of automatic classification systems for multi-lane roads.
The classification system 2 of FIGS. 1 and 2 is arranged to automatically classify vehicles traveling on a traffic lane 4. The classification system 2 comprises a camera 6 arranged on a support, provided here in the form of a gantry spanning the traffic lane. As a variant, in the case where the number of lanes to be analyzed does not exceed two, a simple post holding the camera is sufficient. The camera 6 is a digital video camera providing two-dimensional (2D) images of the scene present in the field of view 8 of the camera 6.
[0005] The camera 6 has a digital photosensitive sensor, for example a CCD or CMOS sensor. The camera 6 provides matrix images of light intensity in the spectral band of visible frequencies. The camera 6 provides, for example, light-intensity images in a spectral band between 400 nm and 1 micrometer.
[0006] Each image is composed of a matrix of pixels, each pixel being associated with a luminous intensity value. The camera 6 is for example a black-and-white camera providing images where each pixel is associated with a single intensity value, corresponding to a gray level. In a variant, the camera 6 is a color camera, each pixel being associated with several intensity values, each for a respective color.
[0007] The camera 6 is arranged so as to capture the vehicles 10 traveling on the traffic lane 4 in a plunging, three-quarter view. The camera 6 is oriented so that the front face 12 and then a side face 14 of a vehicle 10 traveling on the lane 4 appear successively on the images taken by the camera 6.
[0008] The camera 6 is positioned at a height above the vehicles. The shooting axis A of the camera 6 is directed obliquely downwards. The shooting axis A of the camera makes a non-zero angle with the horizontal plane and with the vertical direction. The angle α between the shooting axis A and the horizontal plane is between 20 and 35 degrees (FIG. 1). The camera is arranged at a height Z of between 5 meters and 7.5 meters above the level of the traffic lane. The camera 6 is offset laterally with respect to the central axis L of the traffic lane 4. The shooting axis A of the camera is oblique with respect to the central axis L of the traffic lane. In projection in a horizontal plane (FIG. 2), the shooting axis A of the camera makes an angle B of between 10 and 25 degrees with the central axis L of the traffic lane 4. The classification system 2 comprises a data processing unit 20 configured to automatically classify the vehicles appearing on the images taken by the camera 6, exclusively by digital processing of the images taken by the camera 6. As an option, this processing unit can be integrated into the camera 6.
[0009] The data processing unit 20 comprises a processor 22, a memory 24 and a software application stored in the memory 24 and executable by the processor 22. The software application includes software instructions for determining the class of vehicles appearing on the images provided by the camera 6 exclusively by digital processing of the images taken by the camera 6.
[0010] The software application includes one or more image processing algorithms for determining the class of vehicles appearing on the images provided by the camera. Optionally, the classification system 2 comprises a light projector 26 for illuminating the scene in the field of view 8 of the camera 6. The light projector 26 emits, for example, light in the non-visible range, in particular infrared light, so as not to dazzle drivers. The light projector 26 emits, for example, pulsed light, in synchronism with the image capture of the camera 6, to limit the light emitted toward the vehicles. The camera 6 is sensitive to the light of the light projector 26.
[0011] FIGS. 3 and 4 illustrate raw images taken by the camera one after the other and on which the same vehicle 10 appears. On the first raw image appear the front face 12 and a fraction of the side face 14 of the vehicle 10. On the second raw image appears a fraction of the lateral face 14 of the vehicle 10. The data processing unit 20 is programmed to implement a first step of rectification of the raw images provided by the camera 6. Because of the orientation of the camera 6 relative to the traffic lane 4, a perspective effect is present on the images. As a result, the geometric shapes of the objects present on the image are distorted with respect to reality. The rectification step consists of applying a geometric transformation to each raw image to obtain a rectified image on which the parallelism between parallel elements of the scene and the perpendicularity between perpendicular elements of the scene are restored, in particular the parallelism between parallel elements of the side face 14 of a vehicle 10 and the perpendicularity between perpendicular elements of the side face 14 of a vehicle 10. The same predetermined geometric transformation is applied to all the images taken by the camera. The geometric transformation is determined a priori in a calibration phase of the classification system 2. The rectification step comprises the application of a transformation matrix to the raw image to produce the rectified image. The transformation matrix is a predetermined matrix, determined in a calibration phase of the classification system 2. FIGS. 5 and 6 illustrate the rectified images corresponding to the raw images of FIGS. 3 and 4 respectively. As shown in FIGS. 5 and 6, the transformation used to rectify the images from the camera 6 is a homography, that is to say a projection of the projective space onto itself. This transformation is given by a 3x3 transformation matrix containing eight degrees of freedom. In the image of FIG. 6, the parallelism, for example between the lower edge 30 and the upper edge 32 of the trailer of the vehicle 10, is restored, and the perpendicularity, for example between the upper edge 32 and the rear edge 34 of the trailer of the vehicle 10, is restored. The data processing unit 20 is programmed to implement a second, reconstitution step, in which the images of a sequence of images, on each of which at least a fraction of a vehicle 10 appears, are assembled to obtain a reconstituted image on which the vehicle appears along its entire length. The reconstitution step is here carried out from a sequence of rectified images.
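By way of illustration, the rectification step reduces to one fixed projective warp per frame. A minimal sketch, assuming OpenCV in Python; the matrix values are illustrative, not taken from the patent:

```python
import cv2
import numpy as np

# Predetermined 3x3 homography from the calibration phase. These values are
# illustrative only; the real matrix depends on the camera pose and is fixed
# once during calibration.
H = np.array([[1.2,    -0.3,   15.0],
              [0.1,     1.5,  -40.0],
              [0.0002,  0.0005,  1.0]])

def rectify(raw: np.ndarray, out_size=(1280, 720)) -> np.ndarray:
    """Apply the same predetermined homography to every raw frame, restoring
    parallelism and perpendicularity on the vehicle's side face."""
    return cv2.warpPerspective(raw, H, out_size)
```

Because the homography is predetermined at calibration, no per-frame estimation is needed; the warp is the only per-image cost of this step.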
[0012] The data processing unit 20 is programmed to detect points of interest on the images of the image sequence. The data processing unit 20 is programmed to implement an algorithm for detecting points of interest. The point-of-interest detection algorithm is, for example, a corner detection algorithm that detects areas of the image where the intensity gradients vary rapidly in several directions simultaneously. The data processing unit 20 is programmed to associate with each of the points of interest a characteristic vector of the point, so as to be able to mathematically determine the distance separating two points. The point-of-interest characterization is, for example, the first twenty harmonics of a discrete cosine transform (DCT) performed on an area of the image surrounding the point of interest. The data processing unit 20 is programmed to form a reconstituted image by assembling the images of the image sequence. This assembly is performed by matching the points of interest detected in two successive images that have identical characterizations. FIG. 7 illustrates a reconstituted image of the vehicle obtained by assembling multiple images of a rectified image sequence. Examples of points of interest used as reference points for assembling the images are circled in FIG. 7. The data processing unit 20 is programmed to implement a third step of measuring the geometric characteristics of the vehicle from the images taken by the camera. The measurement step is carried out on the reconstituted image of FIG. 7.
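A sketch of this detect-characterize-match pipeline, assuming OpenCV in Python. Only the corner detection and the "first twenty DCT harmonics" come from the description; the patch size, coefficient layout and matching threshold are assumptions:

```python
import cv2
import numpy as np

def detect_points(gray: np.ndarray, max_pts: int = 200) -> np.ndarray:
    # Corner detection: zones where the intensity gradients vary rapidly
    # in several directions simultaneously.
    pts = cv2.goodFeaturesToTrack(gray, maxCorners=max_pts,
                                  qualityLevel=0.01, minDistance=10)
    return np.empty((0, 2), np.float32) if pts is None else pts.reshape(-1, 2)

def dct_descriptor(gray: np.ndarray, pt, half: int = 16):
    # Characterize a point by the 20 lowest-frequency coefficients of a DCT
    # of the surrounding patch; the 32x32 patch size is an assumption.
    x, y = int(pt[0]), int(pt[1])
    h, w = gray.shape
    if not (half <= x < w - half and half <= y < h - half):
        return None                              # patch falls outside the image
    patch = np.float32(gray[y - half:y + half, x - half:x + half])
    return cv2.dct(patch)[:5, :4].flatten()      # 5 x 4 = 20 coefficients

def match_points(descs_a, descs_b, thresh: float = 50.0):
    # Pair points of two successive images whose characterizations are
    # closest in Euclidean distance (the threshold is an assumption).
    pairs = []
    for i, da in enumerate(descs_a):
        dists = [np.linalg.norm(da - db) for db in descs_b]
        j = int(np.argmin(dists))
        if dists[j] < thresh:
            pairs.append((i, j))
    return pairs
```

The offsets of the matched pairs between two successive rectified images can then give the shift at which each image is pasted into the reconstituted image.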
[0013] The measurement step comprises measuring the length L of the vehicle 10, the width l of the vehicle 10 and/or the height H of the vehicle 10. The correspondence between the real dimensions of the vehicle and the dimensions of the vehicle on a reconstituted image is, for example, determined in a calibration step of the classification system. The dimensions of the vehicle on the reconstituted image are, for example, determined as a number of pixels. The contours of the vehicle on the reconstituted image are, for example, determined using an edge detection algorithm. The measurement step comprises measuring the transverse position of the vehicle on the traffic lane. The transverse position is determined, for example, by reference to the traffic lane signaling, in particular the road markings of the traffic lane. The transverse position is here determined with respect to a lateral strip 36 of road marking. The correspondence between the real dimensions of the vehicle and the dimensions of the vehicle image on a reconstituted image, as a function of the lateral position of the vehicle, is, for example, determined in a calibration step of the classification system.
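As a worked example of the pixel-to-metre correspondence fixed at calibration (all numbers hypothetical):

```python
# Hypothetical calibration result: a reference vehicle of known length 12.0 m
# spans 960 px on the reconstituted image, giving the scale for this camera.
SCALE_M_PER_PX = 12.0 / 960.0   # = 0.0125 m/px (illustrative)

def pixels_to_metres(n_pixels: int, lateral_factor: float = 1.0) -> float:
    """Convert a pixel measurement into metres. 'lateral_factor' stands for
    the correction, determined at calibration, that depends on the vehicle's
    transverse position on the lane."""
    return n_pixels * SCALE_M_PER_PX * lateral_factor

print(pixels_to_metres(1464))   # ~18.3 m, e.g. an articulated lorry
```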
[0014] The measurement step includes counting the number of axles of the vehicle. This count is performed by detecting the wheels R on the lateral face 14 of the vehicle 10 visible on the images. The detection of the wheels R is carried out by detecting ellipses on the images taken by the camera, here on the reconstituted image (FIG. 7) obtained from the images taken by the camera. The data processing unit 20 is programmed to implement an ellipse recognition algorithm. For example, ellipse detection can be performed using a generalized Hough transform applied to ellipses or by detecting one of the characteristic configurations of the gradient direction distribution. It is possible that the data processing unit 20 detects false positives, i.e. detects an ellipse in the image even though that ellipse does not correspond to a wheel of the vehicle. Optionally, counting the number of axles includes comparing the position of the ellipses on the image with pre-recorded reference configurations, each corresponding to a possible axle configuration.
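The patent cites a generalized Hough transform for ellipses or characteristic gradient-direction configurations; as a simpler illustrative stand-in, the sketch below fits ellipses to edge contours and keeps near-circular ones (all thresholds are assumptions):

```python
import cv2
import numpy as np

def detect_wheels(side_view: np.ndarray, min_axis=20, max_axis=120):
    """Illustrative wheel detector on the rectified/reconstituted side view:
    fit ellipses to edge contours and keep near-circular ones of plausible
    size. (Contour fitting is a stand-in for the generalized Hough transform
    named in the description.)"""
    edges = cv2.Canny(side_view, 80, 160)
    contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
    wheels = []
    for c in contours:
        if len(c) < 5:                    # fitEllipse needs at least 5 points
            continue
        (cx, cy), (w, h), _angle = cv2.fitEllipse(c)
        if min_axis < w < max_axis and min_axis < h < max_axis \
                and 0.6 < w / h < 1.6:    # near-circular after rectification
            wheels.append((cx, cy, w, h))
    return wheels
```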
[0015] This comparison makes it possible to eliminate false positives in counting the number of axles of a vehicle, by detecting the correspondence of a group of ellipses with a reference configuration and by discarding a supernumerary ellipse, considered to be a false positive. The data processing unit 20 is programmed to determine the class of a vehicle appearing on the images taken by the camera as a function of the measurements and the axle count performed by the data processing unit 20. The data processing unit 20 is programmed to compare these measurements and this count with a set of predetermined criteria. The data processing unit 20 is programmed to detect each vehicle entering the field of view 8 of the camera 6. The detection of each vehicle is performed by detecting movements having coherent trajectories in the field of view of the camera 6 and/or, for example, by contour detection. The distinction between two vehicles following one behind the other can be difficult if the two vehicles are too close.
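A sketch of this configuration matching, with hypothetical reference configurations (the patent does not list actual axle layouts or spacings):

```python
# Hypothetical reference configurations: normalized axle abscissae along the
# vehicle length for a few known layouts (values illustrative, not from the
# patent).
REFERENCE_CONFIGS = {
    "2-axle rigid": [0.15, 0.80],
    "3-axle rigid": [0.15, 0.70, 0.85],
    "5-axle artic": [0.08, 0.25, 0.72, 0.80, 0.88],
}

def best_axle_count(wheel_xs, vehicle_len_px, tol=0.05):
    """Keep the reference configuration whose axle positions best explain a
    subset of the detected ellipses; a supernumerary ellipse (false positive)
    is simply left unmatched and discarded."""
    xs = sorted(x / vehicle_len_px for x in wheel_xs)
    best_name, best_axles = None, 0
    for name, ref in REFERENCE_CONFIGS.items():
        matched = sum(1 for r in ref if any(abs(r - x) < tol for x in xs))
        if matched == len(ref) and matched > best_axles:
            best_name, best_axles = name, matched
    return best_name, best_axles
```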
[0016] The data processing unit 20 is programmed to detect the separation between two nearby vehicles. By day, this detection is based on the presence of symmetrical elements such as the grille and/or on the detection of a license plate, possibly with a reading of the license plate. At night, the detection of the separation between two nearby vehicles is based on the detection of the presence of the headlights and on the detection of a license plate, which is visible by day as well as by night because it is retroreflective. The detection of the license plate is performed, for example, by detecting the characteristic signature of the gradients of the image around the plate. The reading of the license plate is carried out, for example, by means of an optical character recognition algorithm. At night, the detection of the headlights is carried out, for example, by means of an algorithm for detecting significant variations in luminance in the images coming from the camera 6. Alternatively or optionally, the detection of the presence of two close vehicles is made from traffic lane reference elements, such as road markings or safety barriers.
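A sketch of plate localization by gradient signature, assuming OpenCV in Python. The pipeline (horizontal Sobel gradients, Otsu threshold, morphological closing, aspect-ratio filter) is a common realization of the "characteristic signature of the gradients", not necessarily the patent's exact method:

```python
import cv2
import numpy as np

def find_plate_candidates(gray: np.ndarray):
    """Locate plate-like regions: the characters of a plate produce a dense
    band of strong horizontal intensity gradients."""
    grad_x = cv2.Sobel(gray, cv2.CV_8U, 1, 0, ksize=3)      # horizontal gradients
    _, binary = cv2.threshold(grad_x, 0, 255,
                              cv2.THRESH_BINARY | cv2.THRESH_OTSU)
    # Close the gaps between characters so they fuse into one blob.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (17, 3))
    band = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)
    contours, _ = cv2.findContours(band, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    boxes = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if h > 0 and 2.0 < w / h < 6.0 and w > 60:   # plate-like aspect ratio
            boxes.append((x, y, w, h))
    return boxes
```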
[0017] For example, the detection, on the images of a sequence of images, of different sections of a signaling element visible between two zones where the signaling element is masked by a vehicle is a sign that the two zones correspond to two close vehicles, the visible section on each image corresponding to the interval between these two vehicles.
[0018] The reference signaling elements are detected on the images, for example, by mask matching. The signaling elements may nevertheless appear differently on the images depending on the ambient conditions (brightness: day, night, sun, rain, etc.) and on the camera settings, which may be dynamic. The data processing unit 20 is programmed to implement automatic and continuous learning of the photometric characteristics of the traffic lane and of the adjacent signaling, which signaling elements appear as fixed elements on the images taken by the camera. In operation, the camera 6 takes images of the vehicles traveling on the traffic lane. The data processing unit 20 detects on the images the vehicles 10 traveling on the traffic lane 4. When the data processing unit 20 detects a vehicle 10, it records a sequence of raw images taken by the camera 6 on which the vehicle 10 appears.
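The patent does not specify the learning algorithm; a running-average background model is one simple realization of such continuous learning of the fixed elements. A sketch, assuming OpenCV in Python:

```python
import cv2
import numpy as np

class RoadModel:
    """Continuously updated model of the fixed scene elements (road surface,
    markings). A running average is one possible realization of the patent's
    'automatic and continuous learning'; the exact algorithm is not specified."""

    def __init__(self, alpha: float = 0.01):
        self.alpha = alpha       # learning rate: slow enough that passing
        self.background = None   # vehicles barely perturb the model

    def update(self, frame: np.ndarray) -> np.ndarray:
        f = np.float32(frame)
        if self.background is None:
            self.background = f.copy()
        else:
            cv2.accumulateWeighted(f, self.background, self.alpha)
        return cv2.convertScaleAbs(self.background)  # current 8-bit estimate
```

Comparing each new frame against this slowly adapting model keeps the mask-matching of the signaling elements valid as brightness and camera settings drift.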
[0019] The data processing unit 20 implements the rectification step to rectify the raw images and obtain the rectified images by geometric transformation of the raw images. The data processing unit 20 detects and characterizes points of interest on the rectified images and assembles the rectified images according to the points of interest, to form a reconstituted image on which the vehicle 10 appears along its entire length. The data processing unit 20 measures the geometric characteristics of the vehicle 10, in particular the length, the width and the height, and counts the number of wheels visible on the visible lateral face 14 of the vehicle 10. The data processing unit 20 assigns a class to the vehicle according to the determined geometric characteristics. Optionally, the data processing unit 20 records the sequence of images as proof of the passage of the vehicle, in case of a possible dispute.
[0020] The recording is kept for a limited period, during which a dispute is possible. In one embodiment, if the data processing unit 20 determines that it is not possible to automatically classify the vehicle with a sufficient confidence level, it transmits the recording of the sequence of images, for example for manual processing by an operator. The position of the camera 6 and the orientation of its shooting axis A with respect to the traffic lane to which the camera 6 is assigned may vary during the installation of the classification system 2. A method of installing the automatic classification system for motor vehicles includes a calibration phase performed once the classification system is installed. The calibration step is performed before the activation of the classification system or during its use. The calibration phase comprises a measurement calibration step for calibrating the measurements provided by the data processing unit 20 as a function of the position of the camera 6. The measurement calibration comprises the taking of images of a reference vehicle of known dimensions traveling on the lane 4, and the calibration of the measurements provided by the data processing unit 20 according to the known dimensions of the reference vehicle.
[0021] Alternatively or optionally, the measurement calibration includes the taking of images of the traffic lane in the absence of a vehicle, the measurement of elements of the scene captured by the camera 6, and the calibration of the measurements made on a reconstituted image of the scene according to measurements made on the real scene. The calibration phase comprises a geometric transformation calibration step, to determine the geometric transformation necessary to rectify the raw images, that is to say to cancel the perspective effect. The geometric transformation calibration is performed by taking an image of a reference vehicle and graphically designating, for example using the mouse, four points on the reference vehicle forming a rectangle in the real scene. The data processing unit 20 determines the geometric transformation as a function of parallel elements and perpendicular elements of the reference vehicle. Alternatively or optionally, the geometric transformation calibration is performed by taking an image of the scene in the field of view of the camera in the absence of a vehicle and by determining the geometric transformation as a function of parallel elements and/or perpendicular elements of the scene.
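A sketch of this four-point calibration, assuming OpenCV in Python (all coordinates illustrative):

```python
import cv2
import numpy as np

# Four image points designated by the operator (e.g. with the mouse), which
# form a rectangle in the real scene -- here, corners on the reference
# vehicle's side face.
src = np.float32([[412, 310], [905, 262], [938, 540], [430, 575]])

# Where those corners must land for parallelism and perpendicularity to be
# restored: an axis-aligned rectangle in the rectified image.
dst = np.float32([[0, 0], [800, 0], [800, 300], [0, 300]])

H = cv2.getPerspectiveTransform(src, dst)  # 3x3 homography, 8 degrees of freedom
# H is then applied unchanged to every raw image, e.g. with
# cv2.warpPerspective(raw_image, H, (width, height)).
```

Four point correspondences fix exactly the eight degrees of freedom of the homography, which is why a single designated rectangle suffices.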
[0022] As shown in FIG. 2, the classification system 2 is configured to classify vehicles traveling on one lane. The camera is located to the left of the lane (considering the direction of vehicle movement on the lane) or to the right (in dashed lines in FIG. 2). The classification system 2 of FIG. 8 is configured to classify vehicles traveling on a two-lane road on which vehicles travel in the same direction. The classification system 2 comprises a respective camera 40, 42 assigned to each lane. The camera 40 of the right lane is arranged to the right of this right lane, while the camera 42 of the left lane is arranged to the left of the left lane. This prevents a camera assigned to one lane from being masked by a vehicle traveling on the other lane. The classification system of FIG. 9 is configured to classify vehicles traveling on a road with at least three lanes (here exactly three lanes) on which vehicles travel in the same direction.
[0023] The classification system includes at least one respective camera assigned to each lane. A camera 40 assigned to the right lane is arranged to the right of this right lane. A camera 42 assigned to the left lane is arranged to the left of the left lane. A camera 44 assigned to an intermediate lane is arranged to the right of the intermediate lane (solid lines) or to the left of the intermediate lane (dashed lines).
[0024] Optionally, the classification system 2 comprises two cameras 44, 46 assigned to an intermediate lane, one camera being arranged to the right of the intermediate lane and the other camera being arranged to the left of the intermediate lane. The processing unit is programmed to use the images provided by one or the other of the two cameras assigned to the same lane depending on whether the images of one camera or the other are exploitable, for example because of masking of the lane by a vehicle traveling on another lane. The classification system 2 of FIG. 10 differs from that of FIG. 9 in that only one camera is assigned to each lane, being arranged to the left of the lane to which it is assigned, or alternatively to the right. This configuration of the cameras implies a risk of masking for all lanes but one. Thanks to the invention, the classification system 2 is simple and economical to implement. Indeed, the classification of the vehicles circulating on a lane is carried out solely by digital processing of the intensity matrix images delivered by the camera. The sequences of images from this same camera can also be used as proof of the passage of the classified vehicle. It is not necessary to provide ancillary devices such as magnetic loops, laser scanners, time-of-flight cameras or thermal imagers, whose installation and use are expensive. The classification system 2 can be installed easily and quickly while minimizing the impact on traffic. The digital processing applied to the raw images captured by the camera is simple and makes it possible to classify vehicles reliably. The calibration phase allows a reliable implementation and is simple to perform.
Claims:
Claims (13)
[0001]
CLAIMS
1. An automatic classification system for motor vehicles traveling on a road, comprising a data processing unit (20) programmed to classify a vehicle present on images captured by a camera (6), by processing the captured images, the captured images being intensity matrix images and the camera (6) being arranged to capture images of the vehicles in a plunging, three-quarter view, preferably a three-quarter front view.
[0002]
2. Classification system according to claim 1, wherein the data processing unit (20) is programmed to calculate a rectified image from a captured raw image, so as to restore, on the rectified image, the parallelism between the parallel elements of the real scene and the perpendicularity between the perpendicular elements of the real scene.
[0003]
3. The classification system of claim 2, wherein the calculation of the rectified image is performed by applying a predetermined transformation matrix to the captured image.
[0004]
4. The classification system according to any one of the preceding claims, wherein the data processing unit (20) is programmed to calculate a reconstituted image on which a vehicle is visible along its entire length, from a sequence of images on which at least one section of the vehicle appears.
[0005]
5. The classification system according to claim 4, wherein the data processing unit (20) is programmed to identify characteristic points of the vehicle appearing in a plurality of images of the image sequence and to combine the images of the image sequence according to the identified characteristic points to form the reconstituted image.
[0006]
6. The classification system of claim 4 or 5, wherein the data processing unit (20) is programmed to calculate the length and/or height of the vehicle from the reconstituted image.
[0007]
7. The classification system of claim 4, 5 or 6, wherein the data processing unit (20) is programmed to count the number of axles of the vehicle by counting the number of wheels appearing on a lateral face of the vehicle visible on the reconstituted image.
[0008]
8. The classification system of claim 7, wherein the data processing unit (20) is programmed to detect a wheel using an ellipse identification algorithm.
[0009]
9. The classification system of claim 8, wherein the data processing unit (20) is programmed to count the number of axles according to predetermined axle configurations.
[0010]
10. A classification system according to any one of the preceding claims, wherein the data processing unit (20) is programmed to detect the entry of a new vehicle into the field of the camera (6) by detection of a license plate on the captured images.
[0011]
11. A classification system according to any one of the preceding claims, wherein the data processing unit is programmed to detect the separation between a vehicle and the next vehicle according to the photometric characteristics of the road and the signaling.
[0012]
12. A method for automatic classification of motor vehicles traveling on a road, comprising the classification of a vehicle present on images captured by a camera (6), by processing the captured images with a data processing unit (20), the captured images being intensity matrix images and the camera (6) being arranged to capture images of the vehicles in a plunging, three-quarter view, preferably a three-quarter front view.
[0013]
13. A computer program product programmed to implement the method of claim 12 when executed by a data processing unit.
Similar technologies:
Publication number | Publication date | Patent title
EP2924671A1|2015-09-30|Automatic automotive vehicle classification system
US11270134B2|2022-03-08|Method for estimating distance to an object via a vehicular vision system
JP5867807B2|2016-02-24|Vehicle identification device
KR101417571B1|2014-07-08|Object identification device
AU2015306477B2|2020-12-10|Method and axle-counting device for contact-free axle counting of a vehicle and axle-counting system for road traffic
US20180239970A1|2018-08-23|Adaptive lane marker detection for a vehicular vision system
US8284996B2|2012-10-09|Multiple object speed tracking system
CN107463918A|2017-12-12|Lane line extracting method based on laser point cloud and image data fusion
FR2899332A1|2007-10-05|VISIBILITY FIELD MEASUREMENT DEVICE FOR VEHICLE AND DRIVING ASSISTANCE SYSTEM FOR VEHICLE
US10368063B2|2019-07-30|Optical test device for a vehicle camera and testing method
CN106461387B|2020-11-20|Stereo camera apparatus and vehicle provided with stereo camera
Andreone et al.2002|Vehicle detection and localization in infra-red images
JP2016184316A|2016-10-20|Vehicle type determination device and vehicle type determination method
US10635914B2|2020-04-28|Optical test device for a vehicle camera and testing method
JP5539250B2|2014-07-02|Approaching object detection device and approaching object detection method
CN108351964B|2019-10-18|Pattern recognition device and image-recognizing method
WO2011016257A1|2011-02-10|Distance calculation device for vehicle
WO2011124719A1|2011-10-13|Method for detecting targets in stereoscopic images
JP6685704B2|2020-04-22|Vehicle type identification device and vehicle type identification method
KR101341243B1|2013-12-12|Apparatus and method of restoring image damaged by weather condition
KR20140096576A|2014-08-06|Apparatus and method for detecting vehicle
EP1528409B1|2006-06-21|Method and system for measuring the speed of a vehicle on a curved trajectory
FR2958774A1|2011-10-14|Method for detecting object e.g. obstacle, around lorry from stereoscopic camera, involves classifying object from one image, and positioning object in space around vehicle by projection of object in focal plane of stereoscopic camera
FR2935520A1|2010-03-05|METHOD FOR DETECTING A TARGET OBJECT FOR A MOTOR VEHICLE
WO2013062401A1|2013-05-02|A machine vision based obstacle detection system and a method thereof
Patent family:
Publication number | Publication date
CA2886159A1|2015-09-24|
EP2924671A1|2015-09-30|
FR3018940B1|2018-03-09|
US20150269444A1|2015-09-24|
Cited references:
Publication number | Filing date | Publication date | Applicant | Patent title
WO2004042673A2|2002-11-04|2004-05-21|Imperial Vision Ltd.|Automatic, real time and complete identification of vehicles|
EP1526492A1|2003-10-22|2005-04-27|Sagem SA|Method and apparatus for identifying a moving vehicle|
FR2903519A1|2006-07-07|2008-01-11|Cs Systemes D Information Sa|Motor vehicle classification system for tariffication and collection of toll fee, has video camera for discriminating vehicles from images captured upstream of passage channel and transmitting output data of vehicles to computer|
EP2306426A1|2009-10-01|2011-04-06|Kapsch TrafficCom AG|Device for detecting vehicles on a traffic surface|
US20130100286A1|2011-10-21|2013-04-25|Mesa Engineering, Inc.|System and method for predicting vehicle location|
WO1993019441A1|1992-03-20|1993-09-30|Commonwealth Scientific And Industrial Research Organisation|An object monitoring system|
JP4054402B2|1997-04-25|2008-02-27|株式会社東芝|X-ray tomography equipment|
US6681195B1|2000-03-22|2004-01-20|Laser Technology, Inc.|Compact speed measurement system with onsite digital image capture, processing, and portable display|
KR100459475B1|2002-04-04|2004-12-03|엘지산전 주식회사|System and method for judge the kind of vehicle|
EP1504276B1|2002-05-03|2012-08-08|Donnelly Corporation|Object detection system for vehicle|
US9412142B2|2002-08-23|2016-08-09|Federal Law Enforcement Development Services, Inc.|Intelligent observation and identification database system|
US20040167861A1|2003-02-21|2004-08-26|Hedley Jay E.|Electronic toll management|
US7747041B2|2003-09-24|2010-06-29|Brigham Young University|Automated estimation of average stopped delay at signalized intersections|
US7262710B2|2004-09-22|2007-08-28|Nissan Motor Co., Ltd.|Collision time estimation apparatus for vehicles, collision time estimation method for vehicles, collision alarm apparatus for vehicles, and collision alarm method for vehicles|
GB0422585D0|2004-10-12|2004-11-10|Trw Ltd|Sensing apparatus and method for vehicles|
JP4650079B2|2004-11-30|2011-03-16|日産自動車株式会社|Object detection apparatus and method|
EP1662440A1|2004-11-30|2006-05-31|IEE INTERNATIONAL ELECTRONICS & ENGINEERING S.A.|Method for determining the position of an object from a digital image|
AU2007359782A1|2007-10-02|2009-04-09|Tele Atlas B.V.|Method of capturing linear features along a reference-line across a surface for use in a map database|
US8842190B2|2008-08-29|2014-09-23|Adobe Systems Incorporated|Method and apparatus for determining sensor format factors from image metadata|
WO2011070640A1|2009-12-07|2011-06-16|クラリオン株式会社|Vehicle periphery image display system|
US8577088B2|2010-08-05|2013-11-05|Hi-Tech Solutions Ltd.|Method and system for collecting information relating to identity parameters of a vehicle|
JP5483120B2|2011-07-26|2014-05-07|アイシン精機株式会社|Vehicle perimeter monitoring system|
US9286516B2|2011-10-20|2016-03-15|Xerox Corporation|Method and systems of classifying a vehicle using motion vectors|
US20150143913A1|2012-01-19|2015-05-28|Purdue Research Foundation|Multi-modal sensing for vehicle|
US10018703B2|2012-09-13|2018-07-10|Conduent Business Services, Llc|Method for stop sign law enforcement using motion vectors in video streams|
EP2907298B1|2012-10-11|2019-09-18|LG Electronics Inc.|Image processing device and image processing method|
JP2014228943A|2013-05-20|2014-12-08|日本電産エレシス株式会社|Vehicular external environment sensing device, and axial shift correction program and method therefor|
AU2014281656A1|2013-06-17|2016-02-04|International Electronic Machines Corporation|Pre-screening for robotic work|
US9177384B2|2013-07-31|2015-11-03|Trimble Navigation Limited|Sequential rolling bundle adjustment|
GB2447672B|2007-03-21|2011-12-14|Ford Global Tech Llc|Vehicle manoeuvring aids|
US9513103B2|2011-04-19|2016-12-06|Ford Global Technologies, Llc|Hitch angle sensor assembly|
US9926008B2|2011-04-19|2018-03-27|Ford Global Technologies, Llc|Trailer backup assist system with waypoint selection|
US9854209B2|2011-04-19|2017-12-26|Ford Global Technologies, Llc|Display system utilizing vehicle and trailer dynamics|
US9374562B2|2011-04-19|2016-06-21|Ford Global Technologies, Llc|System and method for calculating a horizontal camera to target distance|
US9434414B2|2011-04-19|2016-09-06|Ford Global Technologies, Llc|System and method for determining a hitch angle offset|
US9723274B2|2011-04-19|2017-08-01|Ford Global Technologies, Llc|System and method for adjusting an image capture setting|
US9683848B2|2011-04-19|2017-06-20|Ford Global Technologies, Llc|System for determining hitch angle|
US9937953B2|2011-04-19|2018-04-10|Ford Global Technologies, Llc|Trailer backup offset determination|
US9335163B2|2011-04-19|2016-05-10|Ford Global Technologies, Llc|Trailer length estimation in hitch angle applications|
US9555832B2|2011-04-19|2017-01-31|Ford Global Technologies, Llc|Display system utilizing vehicle and trailer dynamics|
US10196088B2|2011-04-19|2019-02-05|Ford Global Technologies, Llc|Target monitoring system and method|
US9373044B2|2011-07-25|2016-06-21|Ford Global Technologies, Llc|Trailer lane departure warning system|
US9464887B2|2013-11-21|2016-10-11|Ford Global Technologies, Llc|Illuminated hitch angle detection component|
US9464886B2|2013-11-21|2016-10-11|Ford Global Technologies, Llc|Luminescent hitch angle detection component|
US9296421B2|2014-03-06|2016-03-29|Ford Global Technologies, Llc|Vehicle target identification using human gesture recognition|
US9963004B2|2014-07-28|2018-05-08|Ford Global Technologies, Llc|Trailer sway warning system and method|
US9517668B2|2014-07-28|2016-12-13|Ford Global Technologies, Llc|Hitch angle warning system and method|
US10112537B2|2014-09-03|2018-10-30|Ford Global Technologies, Llc|Trailer angle detection target fade warning|
US9533683B2|2014-12-05|2017-01-03|Ford Global Technologies, Llc|Sensor failure mitigation system and mode management|
US9607242B2|2015-01-16|2017-03-28|Ford Global Technologies, Llc|Target monitoring system with lens cleaning device|
US9522699B2|2015-02-05|2016-12-20|Ford Global Technologies, Llc|Trailer backup assist system with adaptive steering angle limits|
WO2016124409A1|2015-02-05|2016-08-11|Philips Lighting Holding B.V.|Road lighting|
US9728084B2|2015-02-25|2017-08-08|Here Global B.V.|Method and apparatus for providing vehicle classification based on automation level|
US9616923B2|2015-03-03|2017-04-11|Ford Global Technologies, Llc|Topographical integration for trailer backup assist system|
US9804022B2|2015-03-24|2017-10-31|Ford Global Technologies, Llc|System and method for hitch angle detection|
US9821845B2|2015-06-11|2017-11-21|Ford Global Technologies, Llc|Trailer length estimation method using trailer yaw rate signal|
KR101648701B1|2015-06-26|2016-08-17|렉스젠|Apparatus for recognizing vehicle number and method thereof|
US10384607B2|2015-10-19|2019-08-20|Ford Global Technologies, Llc|Trailer backup assist system with hitch angle offset estimation|
US10611407B2|2015-10-19|2020-04-07|Ford Global Technologies, Llc|Speed control for motor vehicles|
US9836060B2|2015-10-28|2017-12-05|Ford Global Technologies, Llc|Trailer backup assist system with target management|
US10017115B2|2015-11-11|2018-07-10|Ford Global Technologies, Llc|Trailer monitoring system and method|
US9798953B2|2015-12-17|2017-10-24|Ford Global Technologies, Llc|Template matching solution for locating trailer hitch point|
US9610975B1|2015-12-17|2017-04-04|Ford Global Technologies, Llc|Hitch angle detection for trailer backup assist system|
US10011228B2|2015-12-17|2018-07-03|Ford Global Technologies, Llc|Hitch angle detection for trailer backup assist system using multiple imaging devices|
US9934572B2|2015-12-17|2018-04-03|Ford Global Technologies, Llc|Drawbar scan solution for locating trailer hitch point|
US10155478B2|2015-12-17|2018-12-18|Ford Global Technologies, Llc|Centerline method for trailer hitch angle detection|
US9827818B2|2015-12-17|2017-11-28|Ford Global Technologies, Llc|Multi-stage solution for trailer hitch angle initialization|
US9796228B2|2015-12-17|2017-10-24|Ford Global Technologies, Llc|Hitch angle detection for trailer backup assist system|
US10005492B2|2016-02-18|2018-06-26|Ford Global Technologies, Llc|Trailer length and hitch angle bias estimation|
US10106193B2|2016-07-01|2018-10-23|Ford Global Technologies, Llc|Enhanced yaw rate trailer angle detection initialization|
US10046800B2|2016-08-10|2018-08-14|Ford Global Technologies, Llc|Trailer wheel targetless trailer angle detection|
US10222804B2|2016-10-21|2019-03-05|Ford Global Technologies, Llc|Inertial reference for TBA speed limiting|
CN107464302A|2017-06-28|2017-12-12|北京易华录信息技术股份有限公司|A kind of electric non-stop toll recording method and system based on vehicle electron identifying|
US10710585B2|2017-09-01|2020-07-14|Ford Global Technologies, Llc|Trailer backup assist system with predictive hitch angle functionality|
US11077795B2|2018-11-26|2021-08-03|Ford Global Technologies, Llc|Trailer angle detection using end-to-end learning|
US10829046B2|2019-03-06|2020-11-10|Ford Global Technologies, Llc|Trailer angle detection using end-to-end learning|
EP3913524A1|2020-05-22|2021-11-24|Kapsch TrafficCom AG|Side view camera detecting wheels|
Legal status:
2016-01-14| PLFP| Fee payment|Year of fee payment: 3 |
2017-02-28| PLFP| Fee payment|Year of fee payment: 4 |
2018-02-20| PLFP| Fee payment|Year of fee payment: 5 |
2019-11-29| ST| Notification of lapse|Effective date: 20191106 |
Priority:
Application number | Filing date | Patent title
FR1452453A|FR3018940B1|2014-03-24|2014-03-24|AUTOMATIC CLASSIFICATION SYSTEM FOR MOTOR VEHICLES|
FR1452453|2014-03-24|FR1452453A| FR3018940B1|2014-03-24|2014-03-24|AUTOMATIC CLASSIFICATION SYSTEM FOR MOTOR VEHICLES|
CA2886159A| CA2886159A1|2014-03-24|2015-03-24|Automatic classification system for cars|
US14/667,577| US20150269444A1|2014-03-24|2015-03-24|Automatic classification system for motor vehicles|
EP15160631.6A| EP2924671A1|2014-03-24|2015-03-24|Automatic automotive vehicle classification system|